Human hand gestures are a widely accepted form of real-time input for devices providing a human-machine interface. However, hand gestures are limited in how effectively they can convey the complexity and diversity of human intentions. This study addresses these limitations by proposing a multi-modal input device, based on the observation that each application program requires different user intentions (and demands different functions) and that the machine already knows which application is running. When the running application changes, the same gesture offers a new function required by the new application, and thus the number and complexity of required hand gestures can be greatly reduced. As a simple wearable sensor, we employ a single miniature wireless three-axis gyroscope, whose data are processed by correlation analysis with normalized covariance for continuous gesture recognition. Recognition accuracy is improved by considering both gesture patterns and signal strength and by incorporating a learning mode. In our system, six unit hand gestures successfully provide most of the functions normally commanded through multiple input devices. The characteristics of our approach are adjusted automatically by recognizing the active application program or by learning user preferences. Across three application programs, the approach shows good accuracy (90-96%), which is very promising for designing a unified solution. Furthermore, accuracy reaches 100% as users become more familiar with the system.
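As a rough illustration of the recognition step described above, the sketch below scores a sliding three-axis gyroscope window against stored gesture templates using a normalized-covariance (Pearson-style) correlation, gated by a signal-strength check so weak motion is not mistaken for a deliberate gesture. The function names, array shapes, and thresholds (min_corr, min_rms) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def normalized_covariance(window, template):
    """Pearson-style correlation between a live gyro window and a stored
    gesture template, both shaped [samples, 3] for x/y/z angular rates.
    Assumes the window and template have the same length."""
    w = window - window.mean(axis=0)
    t = template - template.mean(axis=0)
    num = (w * t).sum()
    den = np.sqrt((w ** 2).sum() * (t ** 2).sum())
    return num / den if den > 0 else 0.0

def classify(window, templates, min_corr=0.8, min_rms=20.0):
    """Return the best-matching gesture label, or None if no match.

    Combines the pattern score (normalized covariance) with a
    signal-strength gate (RMS angular rate) as the abstract suggests;
    the threshold values and units (e.g. deg/s) are placeholders."""
    if np.sqrt((window ** 2).mean()) < min_rms:  # too weak: ignore motion
        return None
    scores = {name: normalized_covariance(window, t)
              for name, t in templates.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= min_corr else None
```

In this hypothetical setup, remapping a recognized gesture to a different function per application reduces to swapping the lookup table keyed by the active program, so the same six templates can serve every application.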